Course Syllabus & Notes
This piano app listens and corrects you--and gives you 5 years to master it
When you purchase through links in our articles, we may earn a small commission. A 5-year flowkey Classic Plan is $99.99 (MSRP $899). Trying to teach yourself piano usually breaks down at the same point: you can follow along with sheet music or a video, but you can't verify whether you're doing it right. And, honestly, who wants to take formal lessons every week? Instead, there's an app for that: flowkey, which turns your keyboard or piano into something closer to an interactive lesson setup.
- Information Technology > Security & Privacy (0.77)
- Leisure & Entertainment > Games > Computer Games (0.57)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Hardware (0.92)
Get Office 2024 & training courses for just $114
When you purchase through links in our articles, we may earn a small commission. Get Microsoft Office 2024 Home & Business plus an 8-course training bundle for hundreds off. Many people use Microsoft Office every day--but not always to its full potential. This bundle pairs Microsoft Office 2024 Home & Business with an 8-course training program designed to close that gap. That includes topics like Excel formulas, workflow efficiency, and even how to integrate tools like ChatGPT into your daily work.
- Education (0.93)
- Information Technology > Security & Privacy (0.82)
- Leisure & Entertainment > Games > Computer Games (0.62)
- Information Technology > Hardware (0.97)
- Information Technology > Artificial Intelligence > Machine Learning (0.36)
The Download: supercharged scams and studying AI healthcare
Plus: DeepSeek has unveiled its long-awaited new AI model. When ChatGPT was released in late 2022, it showed how easily generative AI could create human-like text. This quickly caught the eye of cybercriminals, who began using LLMs to compose malicious emails. Since then, they've adopted AI for everything from turbocharged phishing and hyperrealistic deepfakes to automated vulnerability scans. Many organizations are now struggling to cope with the sheer volume of cyberattacks. AI is making them faster, cheaper, and easier to carry out, a problem set to worsen as more cybercriminals adopt these tools--and their capabilities improve.
- North America > United States > New York (0.05)
- North America > United States > Massachusetts (0.05)
- Europe > Norway (0.05)
- Asia > China (0.05)
At 'AI Coachella,' Stanford Students Line Up to Learn From Silicon Valley Royalty
CS 153 has gone viral on the Palo Alto campus--and on X. Not everyone is happy about it. As thousands of influencers descended on southern California earlier this month for the annual Coachella Music Festival, a very Silicon Valley program dubbed "AI Coachella" was taking shape a few hundred miles north in Palo Alto. The class, CS 153, is one of Stanford's buzziest offerings this semester, and like the music festival, it features a star-studded lineup of celebrities--in this case, not pop artists, but Big Tech CEOs. The course is co-taught by Anjney Midha, a former Andreessen Horowitz general partner, and Michael Abbott, Apple's former VP of engineering for cloud services.
- North America > United States > California > Santa Clara County > Palo Alto (0.45)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Education (1.00)
- Information Technology > Services (0.34)
Doubly Outlier-Robust Online Infinite Hidden Markov Model
Yiu, Horace, Sánchez-Betancourt, Leandro, Cartea, Álvaro, Duran-Martin, Gerardo
We derive a robust update rule for the online infinite hidden Markov model (iHMM) for settings where the streaming data contain outliers and the model is misspecified. Leveraging recent advances in generalised Bayesian inference, we define robustness via the posterior influence function (PIF), and provide conditions under which the online iHMM has bounded PIF. Imposing robustness inevitably induces an adaptation lag for regime switching. Our method, called the Batched Robust iHMM (BR-iHMM), balances adaptivity and robustness with two additional tunable parameters. Across limit order book data, hourly electricity demand, and a synthetic high-dimensional linear system, BR-iHMM reduces one-step-ahead forecasting error by up to 67% relative to competing online Bayesian methods. Together with theoretical guarantees of bounded PIF, our results highlight the practicality of our approach for both forecasting and interpretable online learning.
- Asia > Middle East > Jordan (0.04)
- Europe > United Kingdom (0.04)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (0.67)
- Energy > Power Industry (0.34)
- Education > Educational Setting > Online (0.34)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
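The core idea of the abstract, bounding the influence a single outlying observation can exert on an online Bayesian update, can be illustrated with a much simpler stand-in than BR-iHMM. The sketch below (an illustration, not the authors' method) runs a forward filter on a two-regime HMM twice: once with a Gaussian observation likelihood, and once with a heavy-tailed Student-t likelihood whose influence is bounded, so one gross outlier cannot flip the regime belief. All state means, transition probabilities, and data values are invented for the example.

```python
import numpy as np

# Sticky two-regime HMM: state means 0 and 5, unit scale (illustrative values).
MU = np.array([0.0, 5.0])
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])  # transition matrix

def gauss_loglik(y):
    return -0.5 * (y - MU) ** 2  # unit-variance Gaussian, constants dropped

def student_t_loglik(y, nu=3.0):
    # Heavy-tailed emission: the log-likelihood grows only logarithmically
    # in |y - mu|, so any single observation has bounded influence.
    return -0.5 * (nu + 1.0) * np.log1p((y - MU) ** 2 / nu)

def forward_filter(ys, loglik):
    belief = np.array([0.5, 0.5])
    for y in ys:
        pred = A.T @ belief               # predict step
        logw = np.log(pred) + loglik(y)   # update step, in log space
        logw -= logw.max()                # stabilise before exponentiating
        belief = np.exp(logw)
        belief /= belief.sum()
    return belief

ys = [0.1, -0.2, 0.05, 50.0]              # in-regime data, then one gross outlier
naive = forward_filter(ys, gauss_loglik)
robust = forward_filter(ys, student_t_loglik)
print(naive[0], robust[0])  # the Gaussian filter abandons regime 0; the robust one does not
```

The trade-off the abstract names is visible even here: the same heavy tail that ignores the outlier would also slow genuine regime switches, which is why BR-iHMM exposes tunable parameters to balance the two.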
Deep Learning for Sequential Decision Making under Uncertainty: Foundations, Frameworks, and Frontiers
Artificial intelligence (AI) is moving increasingly beyond prediction to support decisions in complex, uncertain, and dynamic environments. This shift creates a natural intersection with operations research and management sciences (OR/MS), which have long offered conceptual and methodological foundations for sequential decision-making under uncertainty. At the same time, recent advances in deep learning, including feedforward neural networks, LSTMs, transformers, and deep reinforcement learning, have expanded the scope of data-driven modeling and opened new possibilities for large-scale decision systems. This tutorial presents an OR/MS-centered perspective on deep learning for sequential decision-making under uncertainty. Its central premise is that deep learning is valuable not as a replacement for optimization, but as a complement to it. Deep learning brings adaptability and scalable approximation, whereas OR/MS provides the structural rigor needed to represent constraints, recourse, and uncertainty. The tutorial reviews key decision-making foundations, connects them to the major neural architectures in modern AI, and discusses leading approaches to integrating learning and optimization. It also highlights emerging impact in domains such as supply chains, healthcare and epidemic response, agriculture, energy, and autonomous operations. More broadly, it frames these developments as part of a wider transition from predictive AI toward decision-capable AI and highlights the role of OR/MS in shaping the next generation of integrated learning--optimization systems.
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Belmont (0.04)
- (7 more...)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- Energy (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
Is Schoolwork Optional Now?
Education is on the verge of becoming fully automated. William Liu is grateful that he finished high school when he did. If the latest AI tools had been around then, he told me, he might have been tempted to use them to do his homework. Liu, now a sophomore at Stanford, finished high school all the way back in 2024. "I have a younger sibling who is just graduating high school," he said.
- North America > United States > Mississippi (0.05)
- North America > United States > California (0.05)
Generative models for decision-making under distributional shift
Cheng, Xiuyuan, Zhu, Yunqin, Xie, Yao
Many data-driven decision problems are formulated using a nominal distribution estimated from historical data, while performance is ultimately determined by a deployment distribution that may be shifted, context-dependent, partially observed, or stress-induced. This tutorial presents modern generative models, particularly flow- and score-based methods, as mathematical tools for constructing decision-relevant distributions. From an operations research perspective, their primary value lies not in unconstrained sample synthesis but in representing and transforming distributions through transport maps, velocity fields, score fields, and guided stochastic dynamics. We present a unified framework based on pushforward maps, continuity, Fokker-Planck equations, Wasserstein geometry, and optimization in probability space. Within this framework, generative models can be used to learn nominal uncertainty, construct stressed or least-favorable distributions for robustness, and produce conditional or posterior distributions under side information and partial observation. We also highlight representative theoretical guarantees, including forward-reverse convergence for iterative flow models, first-order minimax analysis in transport-map space, and error-transfer bounds for posterior sampling with generative priors. The tutorial provides a principled introduction to using generative models for scenario generation, robust decision-making, uncertainty quantification, and related problems under distributional shift.
- North America > United States > Georgia > Rockdale County (0.04)
- North America > United States > Arkansas > Cross County (0.04)
- Asia > Middle East > Jordan (0.04)
- Europe > France > Grand Est > Meurthe-et-Moselle > Nancy (0.04)
- Energy (0.94)
- Banking & Finance > Trading (0.46)
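Two of the viewpoints the tutorial unifies, transport maps and score-driven stochastic dynamics, can be sketched in a few lines for a one-dimensional Gaussian target. This is an illustrative toy under assumed parameters, not the tutorial's framework: the affine map T(z) = m + s z pushes the base measure N(0,1) forward onto N(m, s^2) exactly, while unadjusted Langevin dynamics driven by the target's score field converges to approximately the same law.

```python
import numpy as np

rng = np.random.default_rng(1)
m, s = 3.0, 0.5  # target distribution N(m, s^2), chosen for the example

# 1) Transport-map view: the affine map T(z) = m + s*z is the pushforward
#    taking the base N(0, 1) onto the target exactly.
z = rng.normal(size=100_000)
x_map = m + s * z

# 2) Score-based view: unadjusted Langevin dynamics driven by the target
#    score  grad log p(x) = -(x - m) / s**2  mixes toward the same law
#    (up to an O(eps) discretisation bias).
def langevin(x0, steps=2000, eps=1e-3):
    x = x0
    for _ in range(steps):
        score = -(x - m) / s**2
        x = x + eps * score + np.sqrt(2 * eps) * rng.normal(size=x.shape)
    return x

x_langevin = langevin(rng.normal(size=5000))
print(x_map.mean(), x_langevin.mean())  # both concentrate near m = 3
```

In the tutorial's terms, the first route is a deterministic pushforward map, the second a guided stochastic dynamic; for nontrivial targets the map and the score are what generative models learn rather than write down.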
Persistence diagrams of random matrices via Morse theory: universality and a new spectral diagnostic
We prove that the persistence diagram of the sublevel set filtration of the quadratic form f(x) = x^T M x restricted to the unit sphere S^{n-1} is analytically determined by the eigenvalues of the symmetric matrix M. By Morse theory, the diagram has exactly n-1 finite bars, with the k-th bar living in homological dimension k-1 and having length equal to the k-th eigenvalue spacing s_k = λ_{k+1} - λ_k. This identification transfers random matrix theory (RMT) universality to persistence diagram universality: for matrices drawn from the Gaussian Orthogonal Ensemble (GOE), we derive the closed-form persistence entropy PE = log(8n/π) - 1, and verify numerically that the coefficient of variation of persistence statistics decays as n^{-0.6}. Different random matrix ensembles (GOE, GUE, Wishart) produce distinct universal persistence diagrams, providing topological fingerprints of RMT universality classes. As a practical consequence, we show that persistence entropy outperforms the standard level spacing ratio ⟨r⟩ for discriminating GOE from GUE matrices (AUC 0.978 vs. 0.952 at n = 100, non-overlapping bootstrap 95% CIs), and detects global spectral perturbations in the Rosenzweig-Porter model to which ⟨r⟩ is blind. These results establish persistence entropy as a new spectral diagnostic that captures complementary information to existing RMT tools.
- North America > United States (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Africa > Middle East > Tunisia > Ben Arous Governorate > Ben Arous (0.04)
- Research Report (0.50)
- Instructional Material > Course Syllabus & Notes (0.47)
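The paper's central identification, bar lengths equal eigenvalue spacings, means the persistence diagram needs no topology library at all. The sketch below samples a GOE matrix and computes the persistence entropy of its spacing profile; the matrix normalisation is a conventional choice and is immaterial here, since the entropy of the normalised bar lengths is scale-invariant.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Sample a GOE matrix by symmetrising an iid Gaussian matrix
# (the 1/sqrt(2n) scaling is conventional; PE does not depend on it).
G = rng.normal(size=(n, n))
M = (G + G.T) / np.sqrt(2 * n)

# Per the Morse-theory identification, the k-th finite bar of the
# sublevel-set diagram of x^T M x on S^{n-1} has length equal to the
# eigenvalue spacing s_k = lam_{k+1} - lam_k.
lam = np.linalg.eigvalsh(M)   # eigenvalues, sorted ascending
bars = np.diff(lam)           # the n-1 finite bar lengths

# Persistence entropy of the normalised bar lengths.
p = bars / bars.sum()
PE = -np.sum(p * np.log(p))
print(len(bars), PE)
```

Since PE is bounded above by log(n-1), the closed form log(8n/π) - 1 quoted in the abstract sits only slightly below the maximum-entropy value, consistent with GOE spacings being fairly even across the bulk of the spectrum.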
Virtual Class Enhanced Discriminative Embedding Learning
Recently, learning discriminative features to improve recognition performance has gradually become a primary goal of deep learning, and numerous remarkable works have emerged. In this paper, we propose a novel yet extremely simple method, Virtual Softmax, to enhance the discriminative property of learned features by injecting a dynamic virtual negative class into the original softmax. Injecting the virtual class aims to enlarge the inter-class margin and compress the intra-class distribution by strengthening the decision boundary constraint. Although it may seem odd to optimize with this additional virtual class, we show that our method derives from an intuitive and clear motivation, and that it indeed encourages the features to be more compact and separable. This paper empirically demonstrates the superiority of Virtual Softmax, improving performance on a variety of object classification and face verification tasks.
- Instructional Material > Online (0.90)
- Instructional Material > Course Syllabus & Notes (0.90)
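A minimal numpy sketch of the injection mechanism follows. The specific virtual logit used here, ||W_y|| * ||x|| for the ground-truth class y, is one plausible instantiation of the "dynamic virtual negative class" and is an assumption of this sketch, not quoted from the abstract; by Cauchy-Schwarz it upper-bounds the true-class logit W_y @ x, so the virtual class is only beaten when the feature aligns with its class weight, tightening the boundary in the spirit described.

```python
import numpy as np

def xent(logits, y):
    """Numerically stable softmax cross-entropy for a single example."""
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def virtual_softmax_xent(x, W, y):
    """Softmax cross-entropy with one injected virtual negative class.

    The virtual logit ||W_y|| * ||x|| dominates the true-class logit
    W_y @ x (Cauchy-Schwarz) unless x points along W_y, so minimising
    this loss compresses features toward their class-weight direction.
    (Hypothetical instantiation, not taken verbatim from the paper.)
    """
    virtual = np.linalg.norm(W[y]) * np.linalg.norm(x)
    return xent(np.append(W @ x, virtual), y)

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))   # 5 real classes, 8-dim features (toy sizes)
x = rng.normal(size=8)

plain = xent(W @ x, 2)
augmented = virtual_softmax_xent(x, W, y=2)
print(plain, augmented)       # the virtual class can only raise the loss
```

Because the extra class only ever adds mass to the softmax denominator, the augmented loss strictly exceeds the plain one, which is exactly the stronger constraint the abstract credits for the compact, separable features.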